Learning phonetic distinctions from speech signals
Authors
Abstract
Previous work has shown that connectionist learning systems can simulate important aspects of the categorization of speech sounds by human and animal listeners. Training is performed on representations of synthetic, exemplar voiced and unvoiced stop consonants that have been passed through a computational model of the auditory periphery. In this work, we use the modern inductive inference technique of support vector machines (SVMs) as the learning system. Visualization of the SVM’s weight vector reveals what has been learned about the voiced/unvoiced distinction.
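To make the setup concrete, the sketch below trains a linear SVM on two classes of synthetic time-frequency patterns and reshapes the learned weight vector back into that space so it can be inspected as an image. It is a minimal sketch assuming scikit-learn and NumPy; the random "auditory" features, array sizes, class labels, and the injected voicing cue are hypothetical stand-ins for the auditory-model representations described in the abstract, not the paper's actual data.

# Minimal sketch: linear SVM on stand-in auditory features, then weight-vector visualization.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_per_class = 100          # hypothetical number of synthetic stop-consonant tokens per class
n_freq, n_time = 32, 40    # hypothetical size of the time-frequency representation

# Stand-in for auditory-periphery outputs: "voiced" tokens get extra energy
# in a low-frequency band near onset, a crude proxy for a voicing cue.
unvoiced = rng.normal(size=(n_per_class, n_freq, n_time))
voiced = rng.normal(size=(n_per_class, n_freq, n_time))
voiced[:, :8, :10] += 1.0

X = np.concatenate([voiced, unvoiced]).reshape(2 * n_per_class, -1)
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])  # 1 = voiced, 0 = unvoiced

# A linear SVM's weight vector lives in the same space as the input representation,
# so it can be reshaped into a time-frequency image and displayed.
clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(X, y)

weight_image = clf.coef_.reshape(n_freq, n_time)
print("training accuracy:", clf.score(X, y))
print("weight image shape:", weight_image.shape)
# e.g. plt.imshow(weight_image, origin="lower", aspect="auto") would show which
# time-frequency regions the classifier relies on for the voiced/unvoiced decision.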
Similar papers
Emergence of category-level sensitivities in non-native speech sound learning
Over the course of development, speech sounds that are contrastive in one's native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems ...
Learning to recognize talkers from natural, sinewave, and reversed speech samples.
In 5 experiments, the authors investigated how listeners learn to recognize unfamiliar talkers and how experience with specific utterances generalizes to novel instances. Listeners were trained over several days to identify 10 talkers from natural, sinewave, or reversed speech sentences. The sinewave signals preserved phonetic and some suprasegmental properties while eliminating natural vocal q...
With Referential Cues, Infants Successfully Use Phonetic Detail in Word Learning
The relation between speech perception and word learning in infancy has become a focus for research in early language acquisition (e.g., Fikkert, 2005; Jusczyk & Aslin, 1995; Swingley & Aslin, 2002; Werker, Fennell, Corcoran & Stager, 2002). Considerable attention has been paid to a surprising discrepancy. Although infants’ performance in speech discrimination tasks reveals their sensitivity to...
Concurrent Constraint Programming and Tree-Based Acoustic Modelling
The design of acoustic models is key to a reliable connection between acoustic waveform and linguistic message in terms of individual speech units. We present an original application of concurrent constraint programming in this important area of spoken language processing. The application presented here employs concurrent constraint programming – represented by Mozart/Oz [1] – to overcome the p...
Native language governs interpretation of salient speech sound differences at 18 months.
One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not suf...